5 research outputs found

    CiThruS2 : Open-source Photorealistic 3D Framework for Driving and Traffic Simulation in Real Time

    The automotive and transport sector is undergoing a paradigm shift from manual to highly automated driving. This transition is driven by a proliferation of advanced driver assistance systems (ADAS) that seek to provide vehicle occupants with a safe, efficient, and comfortable driving experience. However, increasing the level of automation makes exhaustive physical testing of ADAS technologies impractical. Therefore, the automotive industry is increasingly turning to virtual simulation platforms to speed up time-to-market. This paper introduces the second version of our open-source See-Through Sight (CiThruS) simulation framework, which provides a novel photorealistic virtual environment for vision-based ADAS development. Our 3D urban scene supports realistic traffic infrastructure and driving conditions with a variety of time-of-day, weather, and lighting effects. Different traffic scenarios can be generated with practically any number of autonomous vehicles and pedestrians, which can be made to comply with dedicated traffic regulations. All implemented features have been carefully optimized, and our lightweight simulator exceeds a 4K (3840 × 2160) rendering speed of 60 frames per second when run on an NVIDIA GTX 1060 graphics card or equivalent consumer-grade hardware. Photorealistic graphics rendering and real-time simulation speed make our proposal suitable for a broad range of applications, including interactive driving simulators, visual traffic data collection, virtual prototyping, and traffic flow management.
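
    The scenario-generation capability described above lends itself to a declarative description. The sketch below is a purely hypothetical illustration of what such a configuration could look like; every name and field in it is invented for this example and is not part of the actual CiThruS2 interface.

```python
# Hypothetical sketch of a declarative traffic-scenario description; all
# names and fields are illustrative and not taken from the real CiThruS2 API.
from dataclasses import dataclass
import random

@dataclass
class Scenario:
    num_vehicles: int
    num_pedestrians: int
    time_of_day: float           # hours, 0.0-24.0
    weather: str                 # e.g. "clear", "rain", "fog"
    obey_traffic_rules: bool = True
    seed: int = 42

def spawn_assignments(scenario, node_ids):
    """Assign each agent a distinct spawn node, reproducibly via the seed."""
    rng = random.Random(scenario.seed)
    picks = rng.sample(node_ids, scenario.num_vehicles + scenario.num_pedestrians)
    return {"vehicles": picks[:scenario.num_vehicles],
            "pedestrians": picks[scenario.num_vehicles:]}

scene = Scenario(num_vehicles=50, num_pedestrians=120,
                 time_of_day=18.5, weather="rain")
print(spawn_assignments(scene, node_ids=list(range(1000))))
```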

    Camera localization and 3D surface reconstruction on low-power embedded devices

    This thesis explores the opportunities for real-time camera localization and 3D surface reconstruction on an embedded device and demonstrates one practical implementation of such a system. Previous implementations are analyzed, and their usability on embedded platforms is discussed. The importance of accurate and fast localization in modern and future applications is considered and taken into account in the practical implementation of the system. 3D localization and surface reconstruction can be utilized in a vast number of use cases. Some of the more prevalent are advanced robotics, security and military applications, geo-scanning, the aviation industry, and the entertainment sector. Recent advancements in extended reality and mobile devices have accelerated the adoption of high-performance localization even further. At its core, the problem of 3D localization involves inferring the position and rotation of the device, both locally in reference to the last few frames and globally in reference to all previous frames and reconstructed 3D landmarks. Augmenting the localization problem with the reconstruction of robust 3D point clouds and a surface adds further constraints to the requirements. Mainly, the importance of both local and global camera pose consistency is accentuated, because triangulating camera-space 2D image features into world-space 3D points necessitates fulfilling the cheirality constraint. Additionally, deviations in the camera poses induce unwanted noise in the point surface and cause cumulative distortions in the form of the 3D surface. The implemented 3D localization and reconstruction system utilizes various simultaneous localization and mapping techniques for localizing the camera and a diverse set of structure-from-motion algorithms for reconstructing the real world in virtual space. Concepts from edge computing and mobile robotics are used to speed up the reconstruction and visualization workflow. At a high level, the system consists of eight (8) stages: 2D feature detection and matching, camera localization, landmark triangulation, wireless point cloud streaming, point cloud structuration, Poisson 3D surface reconstruction, and 3D visualization. The algorithms involved are examined in detail and considered from the viewpoint of embedded and power-constrained devices. Appropriate measures for optimization are taken when pertinent, and the performance of the system in various scenarios is quantified with performance metrics. The system is shown to be usable in real-world applications, and the obtained reconstruction results are compared against state-of-the-art open-source and academic solutions. The system is open-source under the MIT license and available on GitHub.
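
    The cheirality constraint mentioned above requires that a triangulated landmark lie in front of every camera that observed it. The following sketch triangulates a single correspondence with the standard direct linear transform (DLT) and checks cheirality; it assumes pinhole cameras with known 3×4 projection matrices and illustrates the concept only, rather than reproducing code from the thesis.

```python
# Minimal DLT triangulation with a cheirality check; an illustrative sketch,
# not code from the thesis implementation.
import numpy as np

def triangulate_dlt(P1, P2, x1, x2):
    """Triangulate one 2D-2D correspondence from two 3x4 projection matrices."""
    A = np.stack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, vt = np.linalg.svd(A)   # null vector of A is the homogeneous point
    X = vt[-1]
    return X / X[3]               # normalize so the last coordinate is 1

def satisfies_cheirality(P, X):
    """The triangulated point must have positive depth in the camera frame."""
    return (P @ X)[2] > 0

# Two cameras: one at the origin, one translated 1 unit along the x-axis.
K = np.array([[500.0, 0.0, 320.0], [0.0, 500.0, 240.0], [0.0, 0.0, 1.0]])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])

# Pixel observations of the world point (0, 0, 5) in both images.
X = triangulate_dlt(P1, P2, x1=(320.0, 240.0), x2=(220.0, 240.0))
print(X, satisfies_cheirality(P1, X) and satisfies_cheirality(P2, X))
```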

    Open3DGen : Open-source software for reconstructing textured 3D models from RGB-D images

    This paper presents the first entirely open-source and cross-platform software, called Open3DGen, for reconstructing photorealistic textured 3D models from RGB-D images. The proposed software pipeline consists of nine main stages: 1) RGB-D acquisition; 2) 2D feature extraction; 3) camera pose estimation; 4) point cloud generation; 5) coarse mesh reconstruction; 6) optional loop closure; 7) fine mesh reconstruction; 8) UV unwrapping; and 9) texture projection. This end-to-end scheme combines multiple state-of-the-art techniques and provides an easy-to-use software package for real-time 3D model reconstruction and offline texture mapping. The main innovation lies in various structure-from-motion (SfM) techniques that are used with additional depth data to yield high-quality 3D models in real time and at low cost. The functionality of Open3DGen has been validated on an AMD Ryzen 3900X CPU and an Nvidia GTX 1080 GPU. This proof-of-concept setup attains an average processing speed of 15 fps for 720p (1280×720) RGB-D input without the offline backend. Our solution is shown to be competitive with state-of-the-art commercial and academic solutions in 3D mesh quality and execution performance.
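
    Stage 4 of the pipeline, point cloud generation, amounts to back-projecting each depth pixel through the pinhole camera model. A minimal sketch of that step follows, assuming a metric depth map and known intrinsics; it is an illustration of the underlying math, not the Open3DGen implementation.

```python
# Back-project an RGB-D frame into a colored point cloud (illustrative sketch,
# not the Open3DGen code). Assumes metric depth and known pinhole intrinsics.
import numpy as np

def rgbd_to_point_cloud(depth, rgb, fx, fy, cx, cy):
    """depth: (H, W) metres; rgb: (H, W, 3) uint8. Returns (N, 6) xyz + rgb."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    valid = depth > 0                      # skip missing depth readings
    z = depth[valid]
    x = (u[valid] - cx) * z / fx           # inverse pinhole projection
    y = (v[valid] - cy) * z / fy
    xyz = np.stack([x, y, z], axis=1)
    colors = rgb[valid].astype(np.float64) / 255.0
    return np.hstack([xyz, colors])

# Toy 720p example: a constant 2 m depth plane with black pixels.
depth = np.full((720, 1280), 2.0)
rgb = np.zeros((720, 1280, 3), dtype=np.uint8)
cloud = rgbd_to_point_cloud(depth, rgb, fx=920.0, fy=920.0, cx=640.0, cy=360.0)
print(cloud.shape)   # (921600, 6)
```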

    Demo : CiThruS Traffic Scene Simulator

    This paper describes the main features of our open-source CiThruS simulation environment and a demonstration setup for it. This lightweight simulator is designed for 360-degree traffic imaging at arbitrary positions in the city. It is built with Unity using the open Windridge City Asset, which we populated with autonomous vehicles and pedestrians. The vehicles navigate the city by following predetermined, user-customizable nodes on the map. They can also detect other vehicles, pedestrians, and traffic lights for simple collision avoidance and smoother traffic flow. The pedestrians walk on the sidewalks and stop at the traffic lights when crossing streets. Weather, time-of-day, and lens effects bring the environment closer to reality. In the demonstration, the simulation is controlled with an Xbox controller and runs in real time on a consumer-grade laptop equipped with a 4-core Intel Core i7 CPU and an Nvidia GTX 1060 GPU.
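
    The node-following behaviour can be summarized in a few lines of controller code. The sketch below uses entirely hypothetical names and deliberately simplified dynamics; the simulator itself implements this logic as Unity components rather than in Python.

```python
# Illustrative waypoint-following controller with simple obstacle braking;
# hypothetical names, not the simulator's actual Unity C# components.
import math

def steer_towards(pos, heading, target, max_turn=0.1):
    """Return a new heading turned at most max_turn radians toward target."""
    desired = math.atan2(target[1] - pos[1], target[0] - pos[0])
    error = (desired - heading + math.pi) % (2 * math.pi) - math.pi
    return heading + max(-max_turn, min(max_turn, error))

def advance(pos, heading, nodes, i, speed, obstacle_ahead, dt=0.02):
    """One simulation tick: brake for obstacles, else follow the node list."""
    if obstacle_ahead:
        speed = max(0.0, speed - 5.0 * dt)     # simple braking
    else:
        speed = min(10.0, speed + 2.0 * dt)    # accelerate toward cruise speed
    heading = steer_towards(pos, heading, nodes[i])
    pos = (pos[0] + speed * dt * math.cos(heading),
           pos[1] + speed * dt * math.sin(heading))
    if math.dist(pos, nodes[i]) < 1.0:         # node reached: target the next
        i = (i + 1) % len(nodes)
    return pos, heading, speed, i
```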

    Open-source CiThruS simulation environment for real-time 360-degree traffic imaging

    This paper presents an open-source simulation environment for 360-degree traffic imaging. The environment is built on the openly available AirSim Windridge City Asset. In this work, the city is populated with custom autonomous vehicles and pedestrians. The vehicles navigate along a designed node map that can be manually placed on the roads according to the specified traffic regulations. The vehicles also detect other vehicles, pedestrians, and traffic lights for simple collision avoidance and smoother traffic flow at intersections. The pedestrians follow a NavMesh placed on the walkable areas and stop at the traffic lights when crossing streets. Weather effects, time-of-day variation, and a rain-distortion lens shader bring the environment closer to reality. The whole system is built on top of free and self-made assets, making it easy to use, configure, and extend. The performance of the simulator exceeds 60 frames per second when run on an NVIDIA RTX 2070 with an Intel Xeon E5-2620 or equivalent hardware.
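
    For the 360-degree imaging itself, each pixel of an equirectangular output frame corresponds to a viewing direction on the unit sphere. The sketch below shows this standard mapping in both directions; it is general 360-degree imaging math, not code from the simulator.

```python
# Map equirectangular pixel coordinates to unit viewing directions and back
# (standard 360-degree imaging math; not taken from the simulator itself).
import numpy as np

def pixel_to_direction(u, v, width, height):
    """Equirectangular pixel (u, v) -> unit direction (x, y, z)."""
    lon = (u / width) * 2.0 * np.pi - np.pi      # longitude in [-pi, pi)
    lat = np.pi / 2.0 - (v / height) * np.pi     # latitude in [-pi/2, pi/2]
    return np.array([np.cos(lat) * np.sin(lon),
                     np.sin(lat),
                     np.cos(lat) * np.cos(lon)])

def direction_to_pixel(d, width, height):
    """Unit direction -> equirectangular pixel (u, v)."""
    lon = np.arctan2(d[0], d[2])
    lat = np.arcsin(np.clip(d[1], -1.0, 1.0))
    u = (lon + np.pi) / (2.0 * np.pi) * width
    v = (np.pi / 2.0 - lat) / np.pi * height
    return u, v

# Round trip: the centre-left pixel maps to the -x axis and back.
d = pixel_to_direction(960, 540, width=3840, height=1080)
print(d, direction_to_pixel(d, width=3840, height=1080))
```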